1.
BMC Med Imaging; 24(1): 79, 2024 Apr 06.
Article in English | MEDLINE | ID: mdl-38580932

ABSTRACT

Self-supervised pretraining leverages large amounts of unlabelled data and has proven effective at improving feature representations for transfer learning. This review summarizes recent research into its use in X-ray, computed tomography, magnetic resonance, and ultrasound imaging, concentrating on studies that compare self-supervised pretraining to fully supervised learning for diagnostic tasks such as classification and segmentation. The most pertinent finding is that self-supervised pretraining generally improves downstream task performance compared to full supervision, most prominently when unlabelled examples greatly outnumber labelled examples. Based on the aggregate evidence, recommendations are provided for practitioners considering self-supervised learning. Motivated by limitations identified in current research, directions and practices for future study are suggested, such as integrating clinical knowledge with theoretically justified self-supervised learning methods, evaluating on public datasets, growing the modest body of evidence for ultrasound, and characterizing the impact of self-supervised pretraining on generalization.
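
A minimal sketch of the pretrain-then-fine-tune workflow this review surveys, assuming PyTorch: a backbone is first trained on unlabelled images with a rotation-prediction pretext task, then fine-tuned on a smaller labelled set. The pretext task, architecture, and all names are illustrative assumptions, not the method of any particular reviewed study.

import torch
import torch.nn as nn
import torchvision.models as models

# Backbone whose representations the unlabelled data should improve.
backbone = models.resnet18(weights=None)
feat_dim = backbone.fc.in_features
backbone.fc = nn.Identity()  # expose raw 512-d features

# Phase 1: self-supervised pretraining on unlabelled images.
# Pretext task (illustrative): predict which of four 90-degree rotations was applied.
pretext_head = nn.Linear(feat_dim, 4)
criterion = nn.CrossEntropyLoss()
opt = torch.optim.Adam(
    list(backbone.parameters()) + list(pretext_head.parameters()), lr=1e-3)

def rotate_batch(x):
    """Rotate each image by a random multiple of 90 degrees; return images and labels."""
    k = torch.randint(0, 4, (x.size(0),))
    rotated = torch.stack(
        [torch.rot90(img, int(ki), dims=(1, 2)) for img, ki in zip(x, k)])
    return rotated, k

unlabelled = torch.randn(16, 3, 224, 224)  # stand-in for unlabelled scans
x_rot, y_rot = rotate_batch(unlabelled)
loss = criterion(pretext_head(backbone(x_rot)), y_rot)
opt.zero_grad(); loss.backward(); opt.step()

# Phase 2: supervised fine-tuning on the (typically much smaller) labelled set.
clf_head = nn.Linear(feat_dim, 2)  # e.g. normal vs abnormal
labelled_x, labelled_y = torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,))
ft_opt = torch.optim.Adam(
    list(backbone.parameters()) + list(clf_head.parameters()), lr=1e-4)
ft_loss = criterion(clf_head(backbone(labelled_x)), labelled_y)
ft_opt.zero_grad(); ft_loss.backward(); ft_opt.step()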


Subject(s)
Magnetic Resonance Imaging , Tomography, X-Ray Computed , Humans , X-Rays , Radiography , Ultrasonography
2.
Crit Care Med; 51(2): 301-309, 2023 Feb 01.
Article in English | MEDLINE | ID: mdl-36661454

ABSTRACT

OBJECTIVES: To evaluate the accuracy of a bedside, real-time deployment of a deep learning (DL) model capable of distinguishing between normal (A line pattern) and abnormal (B line pattern) lung parenchyma on lung ultrasound (LUS) in critically ill patients. DESIGN: Prospective, observational study evaluating the performance of a previously trained LUS DL model. Enrolled patients received an LUS examination with simultaneous DL model predictions using a portable device. Clip-level model predictions were analyzed and compared with blinded expert review for A versus B line pattern. Four prediction thresholding approaches were applied to maximize model sensitivity and specificity at the bedside. SETTING: Academic ICU. PATIENTS: One hundred critically ill patients admitted to the ICU, receiving oxygen therapy, and eligible for respiratory imaging were included. Patients who were unstable or could not undergo an LUS examination were excluded. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: A total of 100 unique ICU patients (400 clips) were enrolled from two tertiary-care sites. Fifty-six patients were mechanically ventilated. When compared with gold-standard expert annotation, real-time inference yielded an accuracy of 95%, sensitivity of 93%, and specificity of 96% for identification of the B line pattern. Varying prediction thresholds showed that real-time modification of sensitivity and specificity according to clinical priorities is possible. CONCLUSIONS: A previously validated DL classification model performs equally well in real time at the bedside when deployed on a portable device. As the first study to test the feasibility and performance of a DL classification model for LUS in a dedicated ICU environment, our results justify further inquiry into the impact of incorporating real-time automated medical imaging into the care of the critically ill.
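
A minimal sketch of clip-level prediction thresholding of the kind evaluated here, assuming NumPy: frame-level B line probabilities from a trained model are pooled into one clip score and compared against an adjustable operating point. The pooling rules and thresholds are illustrative assumptions, not the study's four exact approaches.

import numpy as np

def clip_prediction(frame_probs: np.ndarray, threshold: float, agg: str = "mean") -> bool:
    """Return True if the clip is called 'B line pattern'.

    frame_probs: per-frame P(B line) from the classifier.
    threshold:   operating point; lower favours sensitivity, higher favours specificity.
    agg:         rule for pooling frame scores into one clip score.
    """
    if agg == "mean":
        clip_score = frame_probs.mean()
    elif agg == "max":
        clip_score = frame_probs.max()
    else:  # fraction of frames individually exceeding 0.5
        clip_score = (frame_probs > 0.5).mean()
    return bool(clip_score >= threshold)

probs = np.array([0.2, 0.7, 0.9, 0.8, 0.4])  # toy frame probabilities for one clip
for t in (0.3, 0.5, 0.7):                    # candidate operating points
    print(t, clip_prediction(probs, t))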


Subject(s)
Critical Illness , Deep Learning , Humans , Prospective Studies , Critical Illness/therapy , Lung/diagnostic imaging , Ultrasonography/methods , Intensive Care Units
3.
Diagnostics (Basel); 12(10), 2022 Sep 28.
Article in English | MEDLINE | ID: mdl-36292042

ABSTRACT

BACKGROUND: Annotating large medical imaging datasets is an arduous and expensive task, especially when the datasets in question are not organized according to deep learning goals. Here, we propose a method that exploits the hierarchical organization of annotating tasks to optimize efficiency. METHODS: We trained a machine learning model to accurately distinguish between two classes of lung ultrasound (LUS) views using 2908 clips from a larger dataset. Partitioning the remaining dataset by view would reduce downstream labelling efforts by enabling annotators to focus on annotating pathological features specific to each view. RESULTS: In a sample view-specific annotation task, we found that automatically partitioning a 780-clip dataset by view saved 42 minutes of manual annotation time and yielded 55 ± 6 additional relevant labels per hour. CONCLUSIONS: Automatic partitioning of an LUS dataset by view significantly increases annotator efficiency, resulting in higher throughput relevant to the annotating task at hand. The strategy described in this work can be applied to other hierarchical annotation schemes.
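
The partitioning step can be pictured with the sketch below, assuming Python: a view classifier routes unlabelled clips into per-view buckets so annotators see only clips relevant to their view-specific task. predict_view is a hypothetical stand-in for the trained two-class model.

from collections import defaultdict
from typing import Callable, Iterable

def partition_by_view(clips: Iterable[str], predict_view: Callable[[str], str]) -> dict:
    """Group clip identifiers by the view the classifier assigns to them."""
    buckets = defaultdict(list)
    for clip in clips:
        buckets[predict_view(clip)].append(clip)
    return dict(buckets)

# Toy stand-in classifier; in practice this is the trained deep learning model.
toy_model = lambda clip: "view_A" if hash(clip) % 2 else "view_B"
buckets = partition_by_view([f"clip_{i:03d}" for i in range(6)], toy_model)
for view, items in buckets.items():
    print(view, items)  # each annotator receives only the bucket for their view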

4.
Comput Biol Med; 148: 105953, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35985186

ABSTRACT

Pneumothorax is a potentially life-threatening condition that can be rapidly and accurately assessed via the lung sliding artefact generated using lung ultrasound (LUS). Access to LUS is limited by user dependence and a shortage of training. Image classification using deep learning methods can automate LUS interpretation but has not been thoroughly studied for lung sliding. Using a labelled LUS dataset from two academic hospitals, clinical B-mode (also known as brightness or two-dimensional mode) videos featuring both the presence and absence of lung sliding were transformed into motion (M) mode images. These images were subsequently used to train a deep neural network binary classifier that was evaluated using a holdout set comprising 15% of the total data. Grad-CAM explanations were examined. Our binary classifier, using the EfficientNetB0 architecture, was trained on 2535 LUS clips from 614 patients. When evaluated on a test set of data uninvolved in training (540 clips from 124 patients), the model performed with a sensitivity of 93.5%, a specificity of 87.3%, and an area under the receiver operating characteristic curve (AUC) of 0.973. Grad-CAM explanations confirmed the model's focus on relevant regions of the M-mode images. Our solution accurately distinguishes between the presence and absence of lung sliding artefacts on LUS.
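
The B-mode-to-M-mode transformation at the core of this pipeline can be sketched as follows, assuming NumPy: fixing one scan line (an image column) and stacking its pixels across frames yields a depth-versus-time image in which lung sliding appears as a granular texture below the pleural line. The column choice here is an illustrative assumption; the paper's exact line-selection strategy may differ.

import numpy as np

def bmode_to_mmode(video: np.ndarray, column: int) -> np.ndarray:
    """video: (frames, height, width) grayscale clip -> (height, frames) M-mode image."""
    return video[:, :, column].T  # each output column is one instant in time

clip = np.random.randint(0, 256, size=(120, 256, 256), dtype=np.uint8)  # toy clip
mmode = bmode_to_mmode(clip, column=128)
print(mmode.shape)  # (256, 120): depth along rows, time along columns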


Subject(s)
Deep Learning , Pneumothorax , Artifacts , Humans , Lung , Ultrasonography
5.
Diagnostics (Basel); 11(11), 2021 Nov 04.
Article in English | MEDLINE | ID: mdl-34829396

ABSTRACT

Lung ultrasound (LUS) is an accurate thoracic imaging technique distinguished by its handheld size, low cost, and lack of radiation. User dependence and poor access to training have limited the impact and dissemination of LUS outside of acute care hospital environments. Automated interpretation of LUS using deep learning can overcome these barriers by increasing accuracy while allowing point-of-care use by non-experts. In this multicenter study, we seek to automate the clinically vital distinction between the A line (normal parenchyma) and B line (abnormal parenchyma) patterns on LUS by training a customized neural network using 272,891 labelled LUS images. After external validation on 23,393 frames, pragmatic clinical application at the clip level was performed on 1162 videos. The trained classifier demonstrated an area under the receiver operating characteristic curve (AUC) of 0.96 (±0.02) through 10-fold cross-validation on local frames and an AUC of 0.93 on the external validation dataset. Clip-level inference yielded sensitivities and specificities of 90% and 92% (local) and 83% and 82% (external), respectively, for detecting the B line pattern. This study demonstrates accurate deep-learning-enabled discrimination between normal and abnormal lung parenchyma on ultrasound frames while rendering diagnostically important sensitivity and specificity at the video clip level.
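
For readers unfamiliar with the reported metrics, this toy sketch (assuming scikit-learn) shows how clip-level sensitivity, specificity, and AUC are computed from model scores and expert labels; the data are invented and only the standard metric definitions are assumed.

import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0])                     # 1 = B line pattern
y_score = np.array([0.9, 0.8, 0.3, 0.6, 0.4, 0.2, 0.7, 0.55])   # model P(B line)
y_pred = (y_score >= 0.5).astype(int)                           # default operating point

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)  # true positive rate for the B line class
specificity = tn / (tn + fp)  # true negative rate
print(f"AUC={roc_auc_score(y_true, y_score):.2f} "
      f"sensitivity={sensitivity:.2f} specificity={specificity:.2f}")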

6.
BMJ Open; 11(3): e045120, 2021 Mar 05.
Article in English | MEDLINE | ID: mdl-33674378

ABSTRACT

OBJECTIVES: Lung ultrasound (LUS) is a portable, low-cost respiratory imaging tool but is challenged by user dependence and lack of diagnostic specificity. It is unknown whether the advantages of LUS implementation could be paired with deep learning (DL) techniques to match or exceed human-level diagnostic specificity among similar-appearing, pathological LUS images. DESIGN: A convolutional neural network (CNN) was trained on LUS images with B lines of different aetiologies. CNN diagnostic performance, as validated using a 10% data holdback set, was compared with surveyed LUS-competent physicians. SETTING: Two tertiary Canadian hospitals. PARTICIPANTS: 612 LUS videos (121,381 frames) of B lines from 243 distinct patients with either (1) COVID-19 (COVID), (2) non-COVID acute respiratory distress syndrome (NCOVID), or (3) hydrostatic pulmonary edema (HPE). RESULTS: The trained CNN performance on the independent dataset showed an ability to discriminate between COVID (area under the receiver operating characteristic curve (AUC) 1.0), NCOVID (AUC 0.934), and HPE (AUC 1.0) pathologies. This was significantly better than physician performance (AUCs of 0.697, 0.704, and 0.967 for the COVID, NCOVID, and HPE classes, respectively), p<0.01. CONCLUSIONS: A DL model can distinguish similar-appearing LUS pathology, including COVID-19, that cannot be distinguished by humans. The performance gap between humans and the model suggests that subvisible biomarkers may exist within ultrasound images, and multicentre research is merited.
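
The per-class comparison can be illustrated with the one-vs-rest AUC computation sketched below (assuming scikit-learn), using toy softmax outputs rather than the study's data.

import numpy as np
from sklearn.metrics import roc_auc_score

classes = ["COVID", "NCOVID", "HPE"]
y_true = np.array([0, 1, 2, 0, 2, 1, 0, 2])              # toy class indices
rng = np.random.default_rng(0)
y_prob = rng.dirichlet(np.ones(3), size=len(y_true))     # toy softmax outputs

for i, name in enumerate(classes):
    # Binarize: the current class versus the other two, scored by its probability.
    auc = roc_auc_score((y_true == i).astype(int), y_prob[:, i])
    print(f"{name}: one-vs-rest AUC = {auc:.3f}")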


Subject(s)
COVID-19/diagnostic imaging , Deep Learning , Lung/diagnostic imaging , Neural Networks, Computer , Pulmonary Edema/diagnostic imaging , Respiratory Distress Syndrome/diagnostic imaging , Canada , Diagnosis, Differential , Humans
7.
Int J Comput Assist Radiol Surg; 15(11): 1835-1846, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32839888

ABSTRACT

PURPOSE: In the context of analyzing neck vascular morphology, this work formulates and compares Mask R-CNN and U-Net-based algorithms to automatically segment the carotid artery (CA) and internal jugular vein (IJV) from transverse neck ultrasound (US). METHODS: US scans of the neck vasculature were collected to produce a dataset of 2439 images and their respective manual segmentations. Fourfold cross-validation was employed to train and evaluate Mask R-CNN and U-Net models. The U-Net algorithm includes a post-processing step that selects the largest connected segmentation for each class. A Mask R-CNN-based vascular reconstruction pipeline was validated by performing a surface-to-surface distance comparison between US and CT reconstructions from the same patient. RESULTS: The average CA and IJV Dice scores produced by the Mask R-CNN across the evaluation data from all four sets were [Formula: see text] and [Formula: see text]. The average Dice scores produced by the post-processed U-Net were [Formula: see text] and [Formula: see text] for the CA and IJV, respectively. The reconstruction algorithm utilizing the Mask R-CNN was capable of producing accurate 3D reconstructions, with the majority of US reconstruction surface points lying within 2 mm of the CT equivalent. CONCLUSIONS: On average, the Mask R-CNN produced more accurate vascular segmentations than U-Net. The Mask R-CNN models were used to produce 3D reconstructed vasculature with accuracy similar to that of a manually segmented CT scan. This implementation of the Mask R-CNN network enables automatic analysis of the neck vasculature and facilitates 3D vascular reconstruction.
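
The U-Net post-processing step described, keeping only the largest connected segmentation per class, might look like the sketch below, assuming SciPy; the connectivity and other implementation details are assumptions.

import numpy as np
from scipy import ndimage

def largest_component(mask: np.ndarray) -> np.ndarray:
    """Binary mask -> binary mask retaining only its largest connected component."""
    labels, n = ndimage.label(mask)          # default 4-connectivity in 2D
    if n == 0:
        return mask                          # nothing segmented
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)

# Toy mask with two blobs; only the larger one survives post-processing.
m = np.zeros((8, 8), dtype=bool)
m[1:4, 1:4] = True   # 9-pixel blob (kept)
m[6, 6] = True       # 1-pixel speck (discarded)
print(largest_component(m).sum())  # 9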


Subject(s)
Carotid Arteries/diagnostic imaging , Image Processing, Computer-Assisted , Jugular Veins/diagnostic imaging , Algorithms , Deep Learning , Humans , Ultrasonography/methods
8.
Healthc Technol Lett; 6(6): 204-209, 2019 Dec.
Article in English | MEDLINE | ID: mdl-32038858

ABSTRACT

The authors present a deep learning algorithm for the automatic centroid localisation of out-of-plane US needle reflections, producing a semi-automatic ultrasound (US) probe calibration algorithm. A convolutional neural network was trained on a dataset of 3825 images at a 6 cm imaging depth to predict the position of the centroid of a needle reflection. Applying the automatic centroid localisation algorithm to a test set of 614 annotated images produced root mean squared errors of 0.62 and 0.74 mm (6.08 and 7.62 pixels) in the axial and lateral directions, respectively. The mean absolute errors associated with the test set were 0.50 ± 0.40 mm and 0.51 ± 0.54 mm (4.9 ± 3.96 pixels and 5.24 ± 5.52 pixels) for the axial and lateral directions, respectively. The trained model was able to produce visually validated US probe calibrations at imaging depths in the range of 4-8 cm, despite being trained solely at 6 cm. This work automates the pixel localisation required for the guided-US calibration algorithm, producing a semi-automatic implementation available open source through 3D Slicer. The automatic needle centroid localisation improves the usability of the algorithm and has the potential to decrease the fiducial localisation and target registration errors associated with the guided-US calibration method.
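
The reported axial and lateral errors can be reproduced in form (not in value) with a short sketch, assuming NumPy, that converts pixel displacements between predicted and annotated centroids into millimetres and computes the RMSE and mean absolute error; the pixel spacing is an assumed figure for illustration.

import numpy as np

pred = np.array([[120.0, 64.0], [118.0, 70.0], [125.0, 66.0]])  # (axial, lateral) px
true = np.array([[121.0, 63.0], [117.0, 72.0], [124.0, 67.0]])  # annotated centroids
mm_per_px = np.array([0.102, 0.097])  # assumed axial/lateral pixel spacing, mm

err_mm = (pred - true) * mm_per_px    # signed per-axis errors in millimetres
rmse = np.sqrt((err_mm ** 2).mean(axis=0))
mae = np.abs(err_mm).mean(axis=0)
print(f"RMSE axial/lateral: {rmse[0]:.2f}/{rmse[1]:.2f} mm; "
      f"MAE: {mae[0]:.2f}/{mae[1]:.2f} mm")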
